A minimum swept-volume metric structure for configuration space
Borrowing elementary ideas from solid mechanics and differential geometry, this presentation shows that the volume swept by a regular solid undergoing a wide class of volume-preserving deformations induces a rather natural metric structure, with well-defined and computable geodesics, on its configuration space. This general result applies to concrete classes of articulated objects such as robot manipulators; as a proof of concept, we demonstrate the computation of geodesic paths for a free-flying rod and planar robotic arms, as well as their use in path planning with many obstacles.
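To make the swept-volume idea concrete, here is a minimal numerical sketch (not the paper's method; all names are illustrative) that scores a discretized path of a planar free-flying rod by a first-order estimate of the area it sweeps, which could serve as a path cost in the spirit of the metric described above:

```python
import numpy as np

def rod_points(q, length=1.0, n=50):
    """Sample points along a planar rod in configuration q = (x, y, theta)."""
    x, y, th = q
    s = np.linspace(-length / 2, length / 2, n)
    return np.stack([x + s * np.cos(th), y + s * np.sin(th)], axis=1)

def swept_area_step(q0, q1, length=1.0, n=50):
    """First-order estimate of the area swept by the rod moving q0 -> q1:
    integrate |displacement component normal to the rod| along the rod."""
    p0, p1 = rod_points(q0, length, n), rod_points(q1, length, n)
    disp = p1 - p0
    th = q0[2]
    normal = np.array([-np.sin(th), np.cos(th)])  # unit normal to the rod at q0
    ds = length / (n - 1)
    return float(np.sum(np.abs(disp @ normal)) * ds)

def swept_path_cost(path, length=1.0):
    """Swept-area cost of a discretized path (a list of configurations)."""
    return sum(swept_area_step(a, b, length) for a, b in zip(path, path[1:]))

# A unit rod translated perpendicular to its axis by 0.2 should sweep
# roughly a 1.0 x 0.2 rectangle:
path = [(0.0, 0.0, 0.0), (0.0, 0.2, 0.0)]
print(swept_path_cost(path))  # close to 0.2
```

Minimizing such a cost over paths between two configurations is what a geodesic of the swept-volume metric achieves exactly; this sketch only evaluates the cost for small steps.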
On the mechanical contribution of head stabilization to passive dynamics of anthropometric walkers
During steady gait, humans stabilize their head around the vertical orientation. While there are sensori-cognitive explanations for this phenomenon, its mechanical effect on the body dynamics remains unexplored. In this study, we take advantage of the similarities that human steady gait shares with the locomotion of passive dynamic robots. We introduce a simplified anthropometric model to reproduce a broad range of walking dynamics. In a previous study, we showed heuristically that the presence of a stabilized head-neck system significantly influences the dynamics of walking. This paper gives new insights that lead to an understanding of this mechanical effect. In particular, we introduce an original cart upper-body model that allows a better understanding of the mechanical benefit of head stabilization during walking, and we study how sensitive this effect is to the choice of control parameters.
Walking Paths to and from a Goal Differ: On the Role of Bearing Angle in the Formation of Human Locomotion Paths
The path that humans take while walking to a goal is the result of a cognitive process modulated by the perception of the environment and physiological constraints. The path shape and timing implicitly embed aspects of the architecture behind this process. Here, locomotion paths were investigated during a simple task of walking to and from a goal, by looking at the evolution of the position of the human on a horizontal (x, y) plane. We found that the path while walking to a goal was not the same as that while returning from it. Forward and return paths were systematically separated by 0.5-1.9 m, or about 5% of the goal distance. We show that this path separation occurs as a consequence of anticipating the desired body orientation at the goal while keeping the target in view. The magnitude of this separation was strongly influenced by the bearing angle (the difference between body orientation and the angle to the goal) and the final orientation imposed at the goal. This phenomenon highlights the impact of a trade-off between a directional perceptual apparatus (eyes in the head, on the shoulders) and physiological limitations in the formation of human locomotion paths. Our results give insight into the influence of environmental and perceptual variables on human locomotion and provide a basis for further mathematical study of these mechanisms.
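The bearing angle defined in the abstract (the difference between the body's heading and the direction to the goal) is straightforward to compute; the sketch below is an illustration, not code from the study, and the function name is made up:

```python
import math

def bearing_angle(pos, heading, goal):
    """Signed difference between the direction to the goal and the body
    heading, wrapped to (-pi, pi]. pos and goal are (x, y) on the
    horizontal plane; heading is the body orientation in radians."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    angle_to_goal = math.atan2(gy, gx)
    d = angle_to_goal - heading
    return math.atan2(math.sin(d), math.cos(d))  # robust angle wrapping

# Walker at the origin heading along +x, goal up and to the right:
print(math.degrees(bearing_angle((0, 0), 0.0, (1, 1))))  # ~45.0
```

The `atan2(sin, cos)` trick wraps the difference correctly even when the two angles straddle the pi / -pi boundary.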
A motion planner for nonholonomic mobile robots
This paper considers the problem of motion planning for a car-like robot (i.e., a mobile robot with a nonholonomic constraint whose turning radius is lower-bounded). We present a fast and exact planner for our mobile robot model, based upon recursive subdivision of a collision-free path generated by a lower-level geometric planner that ignores the motion constraints. The resultant trajectory is optimized to give a path that is of near-minimal length in its homotopy class. Our claims of high speed are supported by experimental results for implementations that assume a robot moving amid polygonal obstacles. The completeness and the complexity of the algorithm are proven using an appropriate metric in the configuration space R^2 x S^1 of the robot. This metric is defined by using the length of the shortest paths in the absence of obstacles as the distance between two configurations. We prove that the new induced topology and the classical one are the same. Although we concentrate upon the car-like robot, the generalization of these techniques leads to new theoretical issues involving sub-Riemannian geometry and to practical results for nonholonomic motion planning
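The paper's metric is the exact length of the shortest obstacle-free car path between two configurations in R^2 x S^1. That quantity requires a Reeds-Shepp-style construction to compute exactly; as a hedged illustration only, the sketch below computes a simple and well-known lower bound on it, using the facts that the path is at least as long as the straight-line distance and that the car's heading changes at most 1/r per unit of arc length:

```python
import math

def config_dist_lower_bound(q0, q1, r=1.0):
    """Lower bound on the shortest obstacle-free car path between two
    configurations (x, y, theta) for a car with minimum turning radius r.
    Not the exact metric of the paper: just max of (a) the Euclidean
    distance between positions and (b) the arc length r * |dtheta| needed
    to change heading at curvature at most 1/r."""
    dx, dy = q1[0] - q0[0], q1[1] - q0[1]
    dth = q1[2] - q0[2]
    dth = math.atan2(math.sin(dth), math.cos(dth))  # wrap to (-pi, pi]
    return max(math.hypot(dx, dy), r * abs(dth))

# Reversing heading in place costs at least pi * r of driving:
print(config_dist_lower_bound((0.0, 0.0, 0.0), (0.0, 0.0, math.pi)))  # ~3.1416
```

Such a bound is commonly used as an admissible heuristic when searching over car-like configurations; the exact shortest-path length, as used in the paper, additionally requires enumerating the optimal arc-and-segment path families.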
Decidability in robot manipulation planning
Consider the problem of planning collision-free motion of n objects movable through contact with a robot that can autonomously translate in the plane and that can move a maximum of m ≤ n objects simultaneously. This represents the abstract formulation of a general class of manipulation planning problems that are proven to be decidable in this paper. The tools used for proving decidability of this simplified manipulation planning problem are, in fact, general enough to handle the decidability problem for the wider class of systems characterized by a stratified configuration space. These include, e.g., problems of legged and multi-contact locomotion and bi-manual manipulation. In addition, the approach described does not restrict the dynamics of the manipulation system modeled.
Learning Obstacle Representations for Neural Motion Planning
Motion planning and obstacle avoidance is a key challenge in robotics applications. While previous work succeeds in providing excellent solutions for known environments, sensor-based motion planning in new and dynamic environments remains difficult. In this work we address sensor-based motion planning from a learning perspective. Motivated by recent advances in visual recognition, we argue for the importance of learning appropriate representations for motion planning. We propose a new obstacle representation based on the PointNet architecture and train it jointly with policies for obstacle avoidance. We experimentally evaluate our approach for rigid-body motion planning in challenging environments and demonstrate significant improvements over the state of the art in terms of accuracy and efficiency. Comment: CoRL 2020. See the project webpage at https://www.di.ens.fr/willow/research/nmp_repr
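The core property of a PointNet-style obstacle representation is that a shared per-point network followed by a symmetric pooling operation yields a fixed-size code that is invariant to the ordering of the input point cloud. The sketch below is a minimal NumPy illustration of that structure with random weights, not the trained encoder from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class PointNetEncoder:
    """Minimal PointNet-style set encoder: a shared per-point MLP followed
    by a symmetric max-pool, so the output code is invariant to point
    order. Weights are random here; in the paper the encoder is trained
    jointly with the obstacle-avoidance policy."""

    def __init__(self, in_dim=3, hidden=64, out_dim=128):
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, points):
        # points: (n_points, in_dim) obstacle point cloud
        h = np.maximum(points @ self.w1 + self.b1, 0.0)  # shared MLP + ReLU
        h = h @ self.w2 + self.b2
        return h.max(axis=0)  # symmetric pooling -> fixed-size code

enc = PointNetEncoder()
cloud = rng.normal(size=(256, 3))
code = enc(cloud)
perm = rng.permutation(256)
assert np.allclose(code, enc(cloud[perm]))  # permutation invariance
print(code.shape)  # (128,)
```

Because the code has a fixed size regardless of how many points the sensor returns, it can be concatenated with the robot state and fed to a policy network, which is the role the representation plays in the approach above.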